End-to-end speech-to-speech translation (S2ST) is generally evaluated with text-based metrics. This means that generated speech has to be automatically transcribed, making the evaluation dependent on the availability and quality of automatic speech recognition (ASR) systems. In this paper, we propose a text-free evaluation metric for end-to-end S2ST, named BLASER, to avoid the dependency on ASR systems. BLASER leverages a multilingual multimodal encoder to directly encode the speech segments for source input, translation output and reference into a shared embedding space and computes a score of the translation quality that can be used as a proxy for human evaluation. To evaluate our approach, we construct training and evaluation sets from more than 40k human annotations covering seven language directions. The best results of BLASER are achieved by training with supervision from human rating scores. We show that, when evaluated at the sentence level, BLASER correlates significantly better with human judgment than ASR-dependent metrics: better than ASR-SENTBLEU in all translation directions and better than ASR-COMET in five of them. Our analysis shows that combining speech and text as inputs to BLASER does not increase the correlation with human scores, and that the best correlations are achieved when using speech alone, which supports the text-free goal of our research. Moreover, we show that using ASR transcriptions for references is detrimental for text-based metrics.
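To make the text-free idea concrete, the following is a minimal, hypothetical sketch of scoring a translation directly from speech embeddings: the encoder, the vector dimension, and the simple averaging of cosine similarities are illustrative assumptions, not the exact BLASER formulation (whose best variant is a regressor trained on human ratings).

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def text_free_score(src_emb: np.ndarray,
                    mt_emb: np.ndarray,
                    ref_emb: np.ndarray) -> float:
    """Toy text-free score: average similarity of the translation output
    to the source and to the reference, all in a shared embedding space.
    The real BLASER model combines these signals with a learned component."""
    return 0.5 * (cosine(src_emb, mt_emb) + cosine(ref_emb, mt_emb))

# Usage with placeholder vectors standing in for the output of a
# multilingual multimodal speech encoder (one vector per utterance):
rng = np.random.default_rng(0)
src, mt, ref = rng.normal(size=(3, 1024))
print(text_free_score(src, mt, ref))
```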
Recent breakthroughs in natural language processing (NLP) have been driven by language models trained on vast amounts of plain text. While powerful, deriving supervision from textual resources remains an open question; for example, pretrained language models usually ignore the rich, freely available structure in text data. In this thesis, we describe three lines of work that seek to improve the training and evaluation of neural models using naturally occurring supervision. We first investigate self-supervised training losses that help improve the performance of pretrained language models on various NLP tasks. Specifically, we alter the sentence prediction loss to make it better suited to other pretraining losses and more challenging to solve, and we design an intermediate finetuning step that uses self-supervised training to promote models' cross-task generalization ability. We then describe methods that exploit the structure in Wikipedia and in paraphrases. In particular, we propose training losses that leverage hyperlinks, article structure, and article category graphs for entity-, discourse-, and coreference-related knowledge, and we propose a framework that uses paraphrase pairs to disentangle semantics and syntax in sentence representations, extending it to a novel generation task that controls the syntax of the output text with a sentential exemplar. Finally, we discuss work on using textual resources to build challenging evaluation tasks. We introduce three datasets by defining new tasks over various fan-contributed websites, including a long-form data-to-text generation dataset, a screenplay summarization dataset, and a long-form story generation dataset. These datasets have unique characteristics that pose challenges for future task settings.
Backdoor learning is an emerging and important topic for studying the vulnerability of deep neural networks (DNNs). Many pioneering backdoor attack and defense methods are being proposed successively or concurrently, in a rapid arms race. However, we find that the evaluations of new methods are often unthorough in verifying their claims and actual performance, mainly due to the rapid development of the field, diverse settings, and the difficulties of implementation and reproducibility. Without thorough evaluations and comparisons, it is hard to track current progress and design a roadmap for the future development of the literature. To alleviate this dilemma, we build a comprehensive benchmark of backdoor learning named BackdoorBench. It consists of an extensible, modular codebase (currently including implementations of 8 state-of-the-art (SOTA) attack and 9 SOTA defense algorithms) and a standardized protocol for the complete backdoor learning pipeline. We also provide a comprehensive evaluation of every pair of the 8 attacks against the 9 defenses, based on 5 models and 4 datasets, for 8,000 pairs of evaluations in total. We further analyze these 8,000 evaluations from different perspectives, studying the influence of defense algorithms, poisoning ratios, models, and datasets on backdoor learning. All code and evaluations of BackdoorBench are publicly available at \url{https://backdoorbench.github.io}.
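The core of such a benchmark is a standardized attack-versus-defense evaluation grid. The sketch below only illustrates that structure; the registry names, the example entries, and the `run_pair` interface are assumptions for illustration and do not reflect the actual BackdoorBench API.

```python
from itertools import product

# Illustrative registries standing in for the benchmark's modular codebase;
# the real BackdoorBench code lists 8 attacks, 9 defenses, 5 models, 4 datasets.
ATTACKS  = ["attack_a", "attack_b"]     # placeholder attack names
DEFENSES = ["defense_a", "defense_b"]   # placeholder defense names
MODELS   = ["model_a"]                  # placeholder model names
DATASETS = ["dataset_a"]                # placeholder dataset names

def run_pair(attack: str, defense: str, model: str, dataset: str) -> dict:
    """Placeholder: in the real benchmark this would poison `dataset` with
    `attack`, train `model`, apply `defense`, and report clean accuracy and
    attack success rate."""
    return {"clean_acc": None, "attack_success_rate": None}

def run_benchmark() -> dict:
    """Evaluate every attack against every defense across models and datasets."""
    results = {}
    for a, d, m, ds in product(ATTACKS, DEFENSES, MODELS, DATASETS):
        results[(a, d, m, ds)] = run_pair(a, d, m, ds)
    return results
```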
Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations and longer training times. To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT (Devlin et al., 2019). Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large. The code and the pretrained models are available at https://github.com/google-research/ALBERT.
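The abstract does not name the two techniques; one of them in ALBERT is factorized embedding parameterization (the other is cross-layer parameter sharing). Below is a minimal sketch of the factorization idea; the class name and sizes are illustrative, not taken from the ALBERT codebase.

```python
import torch
import torch.nn as nn

class FactorizedEmbedding(nn.Module):
    """Factorized embedding parameterization in the spirit of ALBERT:
    token ids are mapped to a small embedding of size E and then projected
    to the hidden size H, so the embedding costs V*E + E*H parameters
    instead of V*H (a large saving when E << H)."""
    def __init__(self, vocab_size: int = 30000, embed_dim: int = 128,
                 hidden_dim: int = 768):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, embed_dim)  # V x E table
        self.project = nn.Linear(embed_dim, hidden_dim)       # E x H projection

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.project(self.word_emb(token_ids))

# Usage: embeddings = FactorizedEmbedding()(torch.tensor([[1, 2, 3]]))
```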
This paper introduces a novel self-supervised fine-grained dialogue evaluation framework (SelfEval). The core idea is to model the correlation between turn quality and the quality of the whole dialogue. We first propose a novel automatic data construction method that can automatically assign fine-grained scores to arbitrary dialogue data. We then train \textbf{SelfEval} with a multi-level contrastive learning scheme that helps distinguish different score levels. Experimental results on multiple benchmarks show that SelfEval is highly consistent with human evaluation and performs better than state-of-the-art models. We provide a detailed analysis of the experiments in this paper. Our code and data will be released on GitHub.
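As an illustration of what "multi-level contrastive learning over score levels" can look like, here is a toy pairwise hinge objective that pushes predicted scores of higher-quality examples above those of lower-quality ones; the function, the margin, and the pairing scheme are assumptions, not the exact SelfEval training loss.

```python
import torch
import torch.nn.functional as F

def multi_level_contrastive_loss(pred_scores: torch.Tensor,
                                 level_labels: torch.Tensor,
                                 margin: float = 0.1) -> torch.Tensor:
    """Toy multi-level contrastive objective: for every pair of examples whose
    annotated quality levels differ, push the predicted score of the higher
    level above that of the lower level by at least `margin`."""
    diff_levels = level_labels.unsqueeze(0) - level_labels.unsqueeze(1)  # l_j - l_i
    diff_scores = pred_scores.unsqueeze(0) - pred_scores.unsqueeze(1)    # s_j - s_i
    mask = (diff_levels > 0).float()            # pairs where j should outscore i
    loss = F.relu(margin - diff_scores) * mask  # hinge penalty on violated pairs
    return loss.sum() / mask.sum().clamp(min=1)

# Usage: loss = multi_level_contrastive_loss(torch.tensor([0.2, 0.9, 0.5]),
#                                            torch.tensor([0., 2., 1.]))
```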
The authors recently gave an $n^{O(\log \log n)}$-time membership query algorithm for properly learning decision trees under the uniform distribution (Blanc et al., 2021). The previous fastest algorithm for this problem ran in $n^{O(\log n)}$ time, a consequence of the classical algorithm of Ehrenfeucht and Haussler (1989) for the distribution-free setting. In this paper, we highlight the natural open problem of obtaining a polynomial-time algorithm, discuss possible routes towards obtaining it, and state intermediate milestones that we believe are of independent interest.
We give an $n^{O(\log \log n)}$-time membership query algorithm for properly learning decision trees under the uniform distribution over $\{\pm 1\}^n$. Even in the realizable setting, the previous fastest runtime was $n^{O(\log n)}$, a consequence of a classical algorithm of Ehrenfeucht and Haussler. Our algorithm shares similarities with practical heuristics for learning decision trees, which we augment with additional ideas to circumvent the known limitations of these heuristics. To analyze our algorithm, we prove a new structural result for decision trees that strengthens a theorem of O'Donnell, Saks, Schramm, and Servedio. While the OSSS theorem says that every decision tree has an influential variable, we show how every decision tree can be "pruned" so that every variable in the resulting tree is influential.
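To fix intuition for the pruning statement, the toy sketch below removes only the weakest kind of redundant split: a node whose variable has zero effect because both branches compute the same subfunction. The paper's structural result is quantitatively stronger (it makes every remaining variable influential); this code is an illustration of the setting, not of the paper's procedure.

```python
from dataclasses import dataclass
from typing import Optional

# A decision tree over {+1, -1}^n: an internal node queries variable `var`
# and recurses into its +1 / -1 branch; a leaf carries a label in {+1, -1}.
@dataclass
class Node:
    var: Optional[int] = None
    pos: Optional["Node"] = None
    neg: Optional["Node"] = None
    label: Optional[int] = None

def same_function(s: Node, t: Node) -> bool:
    """Syntactic equality of subtrees, a conservative proxy for
    computing the same function."""
    if s.label is not None or t.label is not None:
        return s.label == t.label
    return (s.var == t.var and same_function(s.pos, t.pos)
            and same_function(s.neg, t.neg))

def prune(t: Node) -> Node:
    """Remove splits whose variable has no effect at that node because both
    branches are identical (the trivial, zero-influence case only)."""
    if t.label is not None:
        return t
    pos, neg = prune(t.pos), prune(t.neg)
    if same_function(pos, neg):
        return pos
    return Node(var=t.var, pos=pos, neg=neg)
```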
Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on two datasets with different organs and modalities, where it substantially outperforms existing techniques.
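The sketch below illustrates the general shape of generative replay during one step of sequential adaptation: synthetic images of previously seen domains come from a generator (so no past data is stored), and a frozen copy of the previous model supplies targets on them, while the current model also sees unlabeled new-domain images. All losses and interfaces here are placeholders chosen for illustration; GarDA's actual objectives follow the paper, not this sketch.

```python
import torch
import torch.nn.functional as F

def adapt_to_new_domain(segmenter, old_segmenter, generator,
                        new_domain_loader, optimizer, num_steps=1000):
    """Highly simplified generative-replay adaptation step for a segmentation
    model: retain old-domain behavior via replayed images, adapt to unlabeled
    new-domain images via a simple entropy term (a stand-in objective)."""
    old_segmenter.eval()
    for _, new_imgs in zip(range(num_steps), new_domain_loader):
        replay_imgs = generator.sample(new_imgs.shape[0])   # synthetic "past" images
        with torch.no_grad():
            old_logits = old_segmenter(replay_imgs)          # targets from frozen model
        retention_loss = F.mse_loss(segmenter(replay_imgs), old_logits)
        # Unsupervised term on the new domain: per-pixel entropy minimization.
        probs = segmenter(new_imgs).softmax(dim=1)
        entropy_loss = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
        loss = retention_loss + entropy_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```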
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose MGTAB, a Multi-Relational Graph-Based Twitter Account Detection Benchmark and the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest original data collection in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. For user features, we extracted the 20 user property features with the greatest information gain together with user tweet features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experiment results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
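For the feature-selection step described above, a common way to rank candidate user property features by information gain is via mutual information with the labels, as in the hedged sketch below; the function name, the use of scikit-learn's estimator, and treating mutual information as the information-gain criterion are assumptions, not MGTAB's released pipeline.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def top_k_by_information_gain(features: np.ndarray,
                              labels: np.ndarray,
                              names: list,
                              k: int = 20):
    """Rank candidate user property features by mutual information with the
    account labels and keep the top k (here k=20, mirroring the selection
    described for MGTAB)."""
    gains = mutual_info_classif(features, labels, random_state=0)
    order = np.argsort(gains)[::-1][:k]
    return [(names[i], float(gains[i])) for i in order]
```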
As one of the prevalent methods for building automation systems, Imitation Learning (IL) presents promising performance in a wide range of domains. However, despite the considerable improvement in policy performance, the corresponding research on the explainability of IL models is still limited. Inspired by recent approaches in explainable artificial intelligence, we propose R2RISE, a model-agnostic explanation framework for IL models. R2RISE aims to explain the overall policy performance with respect to the frames in the demonstrations. It iteratively retrains the black-box IL model on randomly masked demonstrations and uses the conventional evaluation outcome, the environment return, as the coefficient to build an importance map. We also conducted experiments to investigate three major questions: whether frames are equally important, whether the importance map is effective, and how importance maps from different IL models are connected. The results show that R2RISE successfully distinguishes important frames from the demonstrations.
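The following is a minimal sketch of the RISE-style aggregation this describes: sample random binary masks over demonstration frames, retrain and evaluate the black-box IL model for each mask, and weight each mask by the resulting return. The `train_and_evaluate` callback and the normalization are assumptions that hide the retraining and rollout details; this is not R2RISE's exact procedure.

```python
import numpy as np

def importance_map(num_frames: int, num_masks: int,
                   train_and_evaluate, keep_prob: float = 0.5,
                   seed: int = 0) -> np.ndarray:
    """RISE-style importance estimate over demonstration frames:
    `train_and_evaluate(mask) -> float` retrains the IL policy on the
    demonstrations with the masked frames removed and returns the
    environment return of the retrained policy."""
    rng = np.random.default_rng(seed)
    scores = np.zeros(num_frames)
    coverage = np.zeros(num_frames)
    for _ in range(num_masks):
        mask = (rng.random(num_frames) < keep_prob).astype(float)
        ret = train_and_evaluate(mask)          # environment return as coefficient
        scores += ret * mask
        coverage += mask
    return scores / np.maximum(coverage, 1.0)   # normalize by how often each frame was kept
```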